11 research outputs found

    Path Planning for Dexterous Mobility

    No full text
    In order to overcome a large variety of run-time constraints, robots are being designed to be more resourceful by incorporating more sensory and motor options for any given task. The added flexibility provides a basis for dexterous problem solving, but challenges planners by increasing the complexity of search. Moreover, the cost of functionally equivalent options can vary dramatically. In the worst case, naive approaches to planning avoid expensive actions until inexpensive options have been explored exhaustively, leading to poor overall search performance. We present a dexterous robot that introduces multiple types of locomotor actions with significant differences in cost and situational value, and we apply standard search techniques to demonstrate the additional challenges that arise in the context of dexterous mobility. Results highlight the incentives, opportunities, and impact of overcoming these challenges. Additionally, we present a prototype path planner that uses environmental features to define an efficient set of subgoals for dexterous motion planning.
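    The cost-asymmetry problem described in this abstract can be made concrete with a small sketch. The following Python fragment is illustrative only and is not taken from the paper: the corridor world, the "drive" and "climb" action names, and the numeric costs are assumptions. It runs uniform-cost search over a toy state space in which one expensive action solves the task, and shows how every cheaper path is expanded first.

    ```python
    # Minimal sketch (hypothetical world and costs): uniform-cost search with
    # two locomotor action types of very different cost. Cheap "drive" steps
    # are expanded exhaustively before the one expensive "climb" is accepted.
    import heapq
    import itertools

    N = 100          # corridor cells 0..N
    GOAL = "ledge"   # reachable from the start only by the costly climb

    def successors(state):
        if state == GOAL:
            return []
        moves = []
        if state + 1 <= N:
            moves.append(("drive", state + 1, 1.0))   # cheap, incremental
        if state == 0:
            moves.append(("climb", GOAL, 50.0))        # expensive, decisive
        return moves

    def uniform_cost_search(start, goal):
        tiebreak = itertools.count()
        frontier = [(0.0, next(tiebreak), start, [])]
        best = {}
        expansions = 0
        while frontier:
            cost, _, state, plan = heapq.heappop(frontier)
            if state == goal:
                return plan, cost, expansions
            if best.get(state, float("inf")) <= cost:
                continue
            best[state] = cost
            expansions += 1
            for action, nxt, step in successors(state):
                heapq.heappush(frontier, (cost + step, next(tiebreak), nxt, plan + [action]))
        return None, float("inf"), expansions

    plan, cost, expansions = uniform_cost_search(0, GOAL)
    print(plan, cost, expansions)
    # -> ['climb'] 50.0 50 : dozens of cheap states are expanded before the
    #    single expensive action that actually solves the task is committed to.
    ```

    The subgoal idea mentioned at the end of the abstract would, in this toy setting, amount to proposing the ledge directly as an intermediate target so the expensive action is considered early rather than only after cheap options are exhausted.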

    Postural Modes and Control for Dexterous Mobile Manipulation: the UMass uBot Concept

    No full text
    Abstract — We present the UMass uBot concept for dexterous mobile manipulation. The uBot concept is built around Bernstein’s definition of dexterity—“the ability to solve a motor problem correctly, quickly, rationally, and resourcefully” [1]. We contend that dexterity in robotic platforms cannot arise from control alone and can only be achieved when the entire design of the robot affords resourceful behavior. uBot-6 is the latest robot in the uBot series, whose design affords several postural configurations and mobility modes. We discuss these dexterous mobility options in detail and demonstrate the strength of dexterous mobility.
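    One way to picture the "postural configurations and mobility modes" mentioned above is as a small set of discrete modes with transition costs that a planner can reason over. The sketch below is an assumption-laden illustration, not the uBot-6 control software: the mode names, speeds, footprints, and transition times are invented for the example.

    ```python
    # Illustrative sketch (not from the paper): postural modes as discrete
    # configurations with assumed traversal speeds and switching costs, so a
    # planner can weigh a posture change against its benefit.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PosturalMode:
        name: str
        speed: float      # nominal travel speed in this mode (m/s, assumed)
        footprint: float  # approximate footprint width (m, assumed)

    MODES = {
        "balancing": PosturalMode("balancing", speed=0.6, footprint=0.35),
        "kneeling":  PosturalMode("kneeling",  speed=0.3, footprint=0.45),
        "prone":     PosturalMode("prone",     speed=0.2, footprint=0.60),
    }

    # Assumed transition costs (seconds) between postural configurations.
    TRANSITION_COST = {
        ("balancing", "kneeling"): 2.0,
        ("kneeling", "balancing"): 3.0,
        ("kneeling", "prone"):     1.5,
        ("prone", "kneeling"):     4.0,
        ("balancing", "prone"):    3.5,
        ("prone", "balancing"):    6.0,
    }

    def traversal_time(mode_name, distance, from_mode=None):
        """Estimated time to cover `distance` in a mode, including the
        posture transition from the current mode if one is required."""
        switch = TRANSITION_COST.get((from_mode, mode_name), 0.0) if from_mode else 0.0
        return switch + distance / MODES[mode_name].speed

    # A low passage might only admit the prone mode; the planner must weigh
    # its slow speed and transition cost against longer alternative routes.
    print(traversal_time("prone", 3.0, from_mode="balancing"))
    ```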

    Choosing Informative Actions for Manipulation Tasks

    No full text
    Abstract—Autonomous robots demand complex behavior to perform tasks in unstructured environments. To meet these expectations efficiently, it is necessary to organize knowledge of past interactions with the world so that it can facilitate future tasks. With this goal in mind, we present a knowledge representation that makes explicit the invariant spatial relationships between the sensorimotor features comprising a rigid body and uses them to reason about other tasks and run-time contexts.
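    A simple way to read "invariant spatial relationships between sensorimotor features comprising a rigid body" is as stored relative poses between features that do not change as the body moves. The sketch below is an assumed illustration of that idea (the 2-D pose representation, the handle/hinge feature names, and the numbers are not from the paper): it records the relative pose between two features at training time and uses it later to predict one feature from an observation of the other.

    ```python
    # Minimal sketch (assumed representation, not the paper's): the relative
    # pose between two features on one rigid body is invariant, so observing
    # one feature in a new context predicts the other.
    import numpy as np

    def se2(x, y, theta):
        """Homogeneous 2-D rigid transform."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, x],
                         [s,  c, y],
                         [0,  0, 1]])

    # Training-time observation: poses of two features (e.g., a handle and a
    # hinge on the same door) expressed in the robot frame.
    handle_in_robot = se2(1.0, 0.2, 0.0)
    hinge_in_robot  = se2(1.0, 1.0, 0.0)

    # The invariant: the hinge pose expressed in the handle frame. Because
    # both features belong to one rigid body, this transform is constant.
    hinge_in_handle = np.linalg.inv(handle_in_robot) @ hinge_in_robot

    # At run time the handle is re-observed elsewhere; the stored invariant
    # predicts where the hinge must be without re-detecting it.
    new_handle_in_robot = se2(2.5, -0.4, np.pi / 2)
    predicted_hinge = new_handle_in_robot @ hinge_in_handle
    print(predicted_hinge[:2, 2])  # predicted hinge position in the robot frame
    ```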